Results 1 - 20 of 254
1.
Parkinsonism Relat Disord ; 123: 106944, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38552350

ABSTRACT

BACKGROUND: Individuals with Parkinson's Disease (IwPD) often fail to adjust their voice in different situations, without awareness of this limitation. Clinicians use self-report questionnaires that are typically designed for individuals with General Voice Disorders (GVD) in the vocal assessment of IwPD. However, these instruments may not account for the reduced self-perception of vocal deficits in IwPD. This study aimed to compare self-reported vocal symptoms and voice loudness between IwPD and individuals with GVD. METHODS: 28 IwPD and 26 individuals with GVD completed the Voice Symptom Scale (VoiSS) questionnaire to evaluate their voice self-perception. Vocal loudness (dB) was also assessed. Univariate and multivariate analyses were used to compare the outcomes from these measures between the two groups. Principal Component Analysis and Hierarchical Clustering Analysis were applied to explore data patterns related to voice symptoms. RESULTS: IwPD reported significantly fewer vocal symptoms than individuals with GVD in all VoiSS questionnaire domains. Multivariate principal component analysis found no significant correlations between VoiSS scores and participant similarities in voice measures. Despite experiencing hypophonia, IwPD scored lower in all VoiSS domains and still fell within the healthy voice range. Hierarchical Clustering Analysis grouped participants into three distinct categories, primarily based on age, vocal loudness, and VoiSS domain scores, distinguishing between PD and GVD individuals. CONCLUSIONS: IwPD reported fewer vocal symptoms than individuals with GVD. Voice self-assessment seems unreliable for evaluating vocal symptoms in IwPD, at least regarding loudness. New self-report instruments tailored to the particular voice characteristics of IwPD are needed.

2.
Behav Res Methods ; 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38459221

ABSTRACT

Timing and rhythm abilities are complex and multidimensional skills that are highly widespread in the general population. This complexity can be partly captured by the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA). The battery, consisting of four perceptual and five sensorimotor tests (finger-tapping), has been used in healthy adults and in clinical populations (e.g., Parkinson's disease, ADHD, developmental dyslexia, stuttering), and shows sensitivity to individual differences and impairment. However, major limitations for the generalized use of this tool are the lack of reliable and standardized norms and of a version of the battery that can be used outside the lab. To circumvent these limitations, we put forward a new version of BAASTA on a tablet device capable of ensuring lab-equivalent measurements of timing and rhythm abilities. We present normative data obtained with this version of BAASTA from over 100 healthy adults between the ages of 18 and 87 years in a test-retest protocol. Moreover, we propose a new composite score to summarize beat-based rhythm capacities, the Beat Tracking Index (BTI), with close to excellent test-retest reliability. BTI derives from two BAASTA tests (beat alignment, paced tapping), and offers a swift and practical way of measuring rhythmic abilities when research imposes strong time constraints. This mobile BAASTA implementation is more inclusive and far-reaching, while opening new possibilities for reliable remote testing of rhythmic abilities by leveraging accessible and cost-efficient technologies.

3.
Sci Rep ; 14(1): 1135, 2024 01 11.
Article in English | MEDLINE | ID: mdl-38212632

ABSTRACT

Humans can easily extract the rhythm of a complex sound, like music, and move to its regular beat, like in dance. These abilities are modulated by musical training and vary significantly in untrained individuals. The causes of this variability are multidimensional and typically hard to grasp in single tasks. To date we lack a comprehensive model capturing the rhythmic fingerprints of both musicians and non-musicians. Here we harnessed machine learning to extract a parsimonious model of rhythmic abilities, based on behavioral testing (with perceptual and motor tasks) of individuals with and without formal musical training (n = 79). We demonstrate that variability in rhythmic abilities and their link with formal and informal music experience can be successfully captured by profiles including a minimal set of behavioral measures. These findings highlight that machine learning techniques can be employed successfully to distill profiles of rhythmic abilities, and ultimately shed light on individual variability and its relationship with both formal musical training and informal musical experiences.


Subjects
Dance, Music, Humans, Auditory Perception, Sound
4.
Cortex ; 170: 53-56, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38101972

ABSTRACT

Since its inception 60 years ago, the mission of Cortex has been to foster a better understanding of cognition and the relationship between the nervous system, behavior in general, and mental processes in particular. Almost 15 years ago, we submitted "a review and proposal" along these lines to the journal, in which we sought to integrate two components that are not often discussed together, namely the basal ganglia and syntactic language functions (Kotz et al., 2009). One of the main motivations was to find potential explanations for two relatively straightforward earlier empirical observations: (i) electroencephalographic event-related potential responses (EEG/ERPs) known to be sensitive markers of syntactic violations in auditory language processing were found to be absent in persons with focal basal ganglia lesions (Friederici et al., 1999; Frisch et al., 2003; Kotz et al., 2003), and (ii) temporally regular rhythmic tone sequences presented before language stimuli were found to compensate for this effect (Kotz et al., 2005; Kotz & Gunter, 2015; Kotz & Schmidt-Kassow, 2015). The critical question was how to reconcile these specific components, the basal ganglia typically associated with motor behavior and language-related syntactic processes, under one hood to foster a better understanding of how the basal ganglia system contributes to auditory language processing. This core question was the starting point for our own further research, and trying to solve it, unsurprisingly, led to many more questions and rather few answers. It also changed perspectives and established collaborative efforts, sometimes in unsuspected ways and directions. In light of the journal's anniversary, we therefore want to take this exciting opportunity for some time travel, looking back at our original conception while linking it to more recent considerations, thereby providing some insights that might be useful for future research.


Subjects
Evoked Potentials, Language, Humans, Auditory Perception/physiology, Electroencephalography, Basal Ganglia
5.
Brain Lang ; 246: 105345, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37994830

ABSTRACT

Based on the idea that neural entrainment establishes regular attentional fluctuations that facilitate hierarchical processing in both music and language, we hypothesized that individual differences in syntactic (grammatical) skills will be partly explained by patterns of neural responses to musical rhythm. To test this hypothesis, we recorded neural activity using electroencephalography (EEG) while children (N = 25) listened passively to rhythmic patterns that induced different beat percepts. Analysis of evoked beta and gamma activity revealed that individual differences in the magnitude of neural responses to rhythm explained variance in six-year-olds' expressive grammar abilities, beyond and complementarily to their performance in a behavioral rhythm perception task. These results reinforce the idea that mechanisms of neural beat entrainment may be a shared neural resource supporting hierarchical processing across music and language and suggest a relevant marker of the relationship between rhythm processing and grammar abilities in elementary-school-age children, previously observed only behaviorally.


Subjects
Individuality, Music, Humans, Child, Auditory Perception/physiology, Linguistics, Electroencephalography, Language
6.
Sci Rep ; 13(1): 21064, 2023 11 29.
Article in English | MEDLINE | ID: mdl-38030693

ABSTRACT

Sensorimotor synchronization strategies have frequently been used for gait rehabilitation in different neurological populations. Despite these positive effects on gait, the attentional processes required to dynamically attend to the auditory stimuli need elaboration. Here, we investigate auditory attention in neurological populations compared to healthy controls, as quantified by EEG recordings. The literature was systematically searched in the PubMed and Web of Science databases. The inclusion criterion was the investigation of auditory attention quantified by EEG recordings in neurological populations in cross-sectional studies. In total, 35 studies were included, covering participants with Parkinson's disease (PD), stroke, traumatic brain injury (TBI), multiple sclerosis (MS), and amyotrophic lateral sclerosis (ALS). A meta-analysis was performed on P3 amplitude and latency separately to compare neurological populations and healthy controls on these measures. Overall, neurological populations showed impairments in auditory processing, in terms of both magnitude and delay, compared to healthy controls. We recommend considering individual auditory processes when selecting and/or designing the auditory structure of sensorimotor synchronization paradigms in neurological physical rehabilitation.


Subjects
Attention, Parkinson Disease, Humans, Cross-Sectional Studies, Gait, Electroencephalography
7.
Prog Neurobiol ; 229: 102502, 2023 10.
Article in English | MEDLINE | ID: mdl-37442410

ABSTRACT

Many animal species show comparable abilities to detect basic rhythms and produce rhythmic behavior. Yet, the capacities to process complex rhythms and synchronize rhythmic behavior appear to be species-specific: vocal learning animals can, but some primates might not. This discrepancy is of high interest as there is a putative link between rhythm processing and the development of sophisticated sensorimotor behavior in humans. Do our closest ancestors show comparable endogenous dispositions to sample the acoustic environment in the absence of task instructions and training? We recorded EEG from macaque monkeys and humans while they passively listened to isochronous equitone sequences. Individual- and trial-level analyses showed that macaque monkeys' and humans' delta-band neural oscillations encoded and tracked the timing of auditory events. Further, mu- (8-15 Hz) and beta-band (12-20 Hz) oscillations revealed the superimposition of varied accentuation patterns on a subset of trials. These observations suggest convergence in the encoding and dynamic attending of temporal regularities in the acoustic environment, bridging a gap in the phylogenesis of rhythm cognition.


Subjects
Auditory Perception, Macaca, Animals, Humans, Acoustic Stimulation, Haplorhini, Acoustics, Electroencephalography
8.
Front Neurosci ; 17: 1193402, 2023.
Article in English | MEDLINE | ID: mdl-37483346

ABSTRACT

Introduction: Auditory verbal hallucinations (AVHs), or hearing non-existent voices, are a common symptom in psychosis. Recent research suggests that AVHs are also experienced by neurotypical individuals. Individuals with schizophrenia who experience AVHs and neurotypicals who are highly prone to hallucinate both produce false positive responses in auditory signal detection. These findings suggest that voice-hearing may lie on a continuum, with similar mechanisms underlying AVHs in both populations. Methods: The current study used a monaural auditory stimulus in a signal detection task to test to what extent experimentally induced verbal hallucinations are (1) left-lateralised (i.e., more likely to occur when stimuli are presented to the right ear than to the left ear, given the left-hemisphere dominance for language processing), and (2) predicted by self-reported hallucination proneness and auditory imagery tendencies. In a conditioning task, fifty neurotypical participants associated a negative word on-screen with the same word being played via headphones through successive simultaneous audio-visual presentations. In a subsequent signal detection task, participants were presented with a target word on-screen and indicated whether they heard the word being played concurrently amid white noise. Results: Pavlovian audio-visual conditioning reliably elicited a significant number of false positives (FPs). However, FP rates, perceptual sensitivities, and response biases did not differ between ears, and were predicted neither by hallucination proneness nor by auditory imagery. Discussion: The results show that experimentally induced FPs in neurotypicals are not left-lateralised, adding further weight to the argument that lateralisation may not be a defining feature of hallucinations in clinical or non-clinical populations. The findings also support the idea that AVHs may be a continuous phenomenon that varies in severity and frequency across the population. Studying induced AVHs in neurotypicals may help identify the underlying cognitive and neural mechanisms contributing to AVHs in individuals with psychotic disorders.
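
The perceptual sensitivities and response biases reported in this study are standard signal detection theory quantities. As an illustration only (not the study's analysis code), here is a minimal sketch of computing sensitivity d' and criterion c from a 2x2 outcome table, using a log-linear correction so extreme rates stay finite; the counts below are hypothetical:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Signal detection measures from raw counts: d' (sensitivity)
    and c (response bias). Adding 0.5 to each cell (log-linear
    correction) keeps rates away from 0 and 1 so z-scores stay finite."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# A liberal responder: many false alarms yield a modest d' and negative c.
d, c = sdt_measures(hits=40, misses=10, false_alarms=25, correct_rejections=25)
```

A negative c marks a bias toward reporting "signal present", which is how false-positive-prone responding would surface in a task like the one described above.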

9.
Phys Life Rev ; 46: 131-151, 2023 09.
Article in English | MEDLINE | ID: mdl-37419011

ABSTRACT

Sociality and timing are tightly interrelated in human interaction as seen in turn-taking or synchronised dance movements. Sociality and timing also show in communicative acts of other species that might be pleasurable, but also necessary for survival. Sociality and timing often co-occur, but their shared phylogenetic trajectory is unknown: How, when, and why did they become so tightly linked? Answering these questions is complicated by several constraints; these include the use of divergent operational definitions across fields and species, the focus on diverse mechanistic explanations (e.g., physiological, neural, or cognitive), and the frequent adoption of anthropocentric theories and methodologies in comparative research. These limitations hinder the development of an integrative framework on the evolutionary trajectory of social timing and make comparative studies not as fruitful as they could be. Here, we outline a theoretical and empirical framework to test contrasting hypotheses on the evolution of social timing with species-appropriate paradigms and consistent definitions. To facilitate future research, we introduce an initial set of representative species and empirical hypotheses. The proposed framework aims at building and contrasting evolutionary trees of social timing toward and beyond the crucial branch represented by our own lineage. Given the integration of cross-species and quantitative approaches, this research line might lead to an integrated empirical-theoretical paradigm and, as a long-term goal, explain why humans are such socially coordinated animals.


Subjects
Biological Evolution, Hominidae, Animals, Humans, Phylogeny, Social Behavior
10.
Cognition ; 239: 105537, 2023 10.
Article in English | MEDLINE | ID: mdl-37487303

ABSTRACT

Compared to audio-only (AO) conditions, audiovisual (AV) information can enhance the aesthetic experience of a music performance. However, such beneficial multimodal effects have yet to be studied in naturalistic music performance settings. Further, peripheral physiological correlates of aesthetic experiences are not well understood. Here, participants were invited to a concert hall for piano performances of Bach, Messiaen, and Beethoven, which were presented in two conditions: AV and AO. They rated their aesthetic experience (AE) after each piece (Experiments 1 and 2), while peripheral signals (cardiorespiratory measures, skin conductance, and facial muscle activity) were continuously measured (Experiment 2). Factor scores of AE were significantly higher in the AV condition in both experiments. The LF/HF ratio, a heart rate variability index reflecting activation of the sympathetic nervous system, was higher in the AO condition, suggesting increased arousal, likely caused by less predictable sound onsets in the AO condition. We present partial evidence that breathing was faster and facial muscle activity was higher in the AV condition, suggesting that observing a performer's movements enhances motor mimicry in these more voluntary peripheral measures. Further, zygomaticus ('smiling') muscle activity was a significant predictor of AE. Thus, we suggest that physiological measures are related to AE, but at different levels: the more involuntary measures (i.e., heart rhythms) may reflect sensory aspects, while the more voluntary measures (i.e., muscular control of breathing and facial responses) may reflect the liking aspect of an AE. In summary, we replicate and extend previous findings that AV information enhances AE in a naturalistic music performance setting. We further show that a combination of self-report and peripheral measures supports a meaningful assessment of AE in naturalistic music performance settings.
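
The LF/HF ratio is conventionally computed from the power spectrum of the interbeat-interval series, summing power in the low-frequency (0.04-0.15 Hz) and high-frequency (0.15-0.4 Hz) bands. As a hedged illustration only (not this study's pipeline, which would interpolate and detrend real RR intervals), here is a pure-Python sketch on a synthetic, evenly resampled series containing one LF and one HF oscillation:

```python
import cmath
import math

def dft_power(x):
    """One-sided power spectrum via a naive DFT (fine for short signals)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(n // 2 + 1)]

def lf_hf_ratio(x, fs):
    """Ratio of summed spectral power in the conventional HRV bands."""
    n = len(x)
    power = dft_power(x)
    lf = sum(p for k, p in enumerate(power) if 0.04 <= k * fs / n < 0.15)
    hf = sum(p for k, p in enumerate(power) if 0.15 <= k * fs / n <= 0.40)
    return lf / hf

# Synthetic 60-s interbeat series resampled at 4 Hz: an LF component at
# 0.10 Hz (amplitude 1.0) plus an HF component at 0.25 Hz (amplitude 0.5),
# so the expected power ratio is (1.0 / 0.5)**2 = 4.
fs = 4.0
t = [i / fs for i in range(240)]
rr = [math.sin(2 * math.pi * 0.10 * ti) + 0.5 * math.sin(2 * math.pi * 0.25 * ti)
      for ti in t]
ratio = lf_hf_ratio(rr, fs)
```

With more LF than HF power, the ratio exceeds 1, the direction the abstract interprets as relatively greater sympathetic activation.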


Subjects
Music, Humans, Auditory Perception/physiology, Arousal/physiology, Sympathetic Nervous System, Movement
11.
Appl Neuropsychol Adult ; : 1-10, 2023 Jul 15.
Article in English | MEDLINE | ID: mdl-37453801

ABSTRACT

The Sensory Gating Inventory (SGI) is an established self-report questionnaire used to assess the capacity for filtering redundant or irrelevant environmental stimuli. Translation and cross-cultural validation of the SGI are necessary to make this tool available to Dutch-speaking populations. This study therefore aimed to design and validate a Dutch Sensory Gating Inventory (D-SGI). To this end, a forward-backward translation was performed and 469 native Dutch speakers completed the questionnaire. A confirmatory factor analysis assessed the psychometric properties of the D-SGI. Additionally, test-retest reliability was measured. Results confirmed satisfactory similarity between the original English SGI and the D-SGI in terms of the psychometric properties of the factor structure. Internal consistency and discriminant validity were also satisfactory. Overall test-retest reliability was excellent (ICC = 0.91, p < 0.001, 95% CI [0.87-0.93]). These findings confirm that the D-SGI is a psychometrically sound self-report measure for assessing the phenomenological dimensions of sensory gating in Dutch. Moreover, the D-SGI is publicly available. This establishes the D-SGI as a new tool for the assessment of sensory gating dimensions in general and clinical Dutch-speaking populations.
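
The abstract reports test-retest reliability as an intraclass correlation (ICC = 0.91) without specifying the ICC form. For illustration, here is a sketch of one common choice, the consistency ICC(3,1) from a two-way subjects-by-sessions table; the data below are hypothetical and the paper's exact model may differ:

```python
def icc_3_1(scores):
    """Consistency ICC(3,1) for an n-subjects x k-sessions table
    (two-way mixed effects, single measurement, consistency form):
    (MS_subjects - MS_error) / (MS_subjects + (k - 1) * MS_error)."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Hypothetical test-retest scores for 4 respondents over 2 sessions:
# two respondents swap ranks between sessions, so reliability is high
# but not perfect.
icc = icc_3_1([[1, 1], [2, 2], [3, 4], [4, 3]])
```

With identical scores across sessions the formula returns 1.0; rank swaps between sessions pull it down, which is the behavior a test-retest ICC should show.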

12.
Eur J Neurosci ; 58(1): 2297-2314, 2023 07.
Article in English | MEDLINE | ID: mdl-37122233

ABSTRACT

Several theories of predictive processing propose reduced sensory and neural responses to anticipated events. Support comes from magnetoencephalography/electroencephalography (M/EEG) studies, showing reduced auditory N1 and P2 responses to self-generated compared to externally generated events, or when the timing and form of stimuli are more predictable. The current study examined the sensitivity of N1 and P2 responses to statistical speech regularities. We employed a motor-to-auditory paradigm comparing event-related potential (ERP) responses to externally and self-triggered pseudowords. Participants were presented with a cue indicating which button to press (motor-auditory condition) or which pseudoword would be presented (auditory-only condition). Stimuli consisted of the participant's own voice uttering pseudowords that varied in phonotactic probability and syllable stress. We expected to see N1 and P2 suppression for self-triggered stimuli, with greater suppression effects for more predictable features such as high phonotactic probability and first-syllable stress in pseudowords. In a temporal principal component analysis (PCA), we observed an interaction between syllable stress and condition for the N1, where second-syllable stress items elicited a larger N1 than first-syllable stress items, but only for externally generated stimuli. We further observed an effect of syllable stress on the P2, where first-syllable stress items elicited a larger P2. Strikingly, we did not observe motor-induced suppression for self-triggered stimuli for either the N1 or P2 component, likely due to the temporal predictability of the stimulus onset in both conditions. Taking into account previous findings, the current results suggest that sensitivity to syllable stress regularities depends on task demands.


Subjects
Auditory Evoked Potentials, Speech, Humans, Auditory Evoked Potentials/physiology, Acoustic Stimulation/methods, Electroencephalography
13.
Behav Brain Res ; 450: 114498, 2023 07 26.
Article in English | MEDLINE | ID: mdl-37201892

ABSTRACT

The medial geniculate body (MGB) of the thalamus is an obligatory relay for auditory processing. A breakdown of adaptive filtering and sensory gating at this level may lead to multiple auditory dysfunctions, while high-frequency stimulation (HFS) of the MGB might mitigate aberrant sensory gating. To further investigate the sensory gating functions of the MGB, this study (i) recorded electrophysiological evoked potentials in response to continuous auditory stimulation, and (ii) assessed the effect of MGB HFS on these responses in noise-exposed and control animals. Pure-tone sequences were presented to assess differential sensory gating functions associated with stimulus pitch, grouping (pairing), and temporal regularity. Evoked potentials were recorded from the MGB and acquired before and after HFS (100 Hz). All animals (unexposed and noise-exposed, pre- and post-HFS) showed gating for pitch and grouping. Unexposed animals also showed gating for temporal regularity not found in noise-exposed animals. Moreover, only noise-exposed animals showed restoration comparable to the typical EP amplitude suppression following MGB HFS. The current findings confirm adaptive thalamic sensory gating based on different sound characteristics and provide evidence that temporal regularity affects MGB auditory signaling.


Subjects
Auditory Cortex, Thalamus, Rats, Animals, Thalamus/physiology, Geniculate Bodies/physiology, Acoustic Stimulation, Sensation, Sensory Gating, Auditory Cortex/physiology
14.
PLoS One ; 18(3): e0283221, 2023.
Article in English | MEDLINE | ID: mdl-36952462

ABSTRACT

Some people report being able to spontaneously "time" the end of their sleep. This ability to self-awaken challenges the idea of sleep as a passive cognitive state. Yet, current evidence on this phenomenon is limited, partly because of the varied definitions of self-awakening and experimental approaches used to study it. Here, we provide a review of the literature on self-awakening. Our aims are to (i) contextualise the phenomenon, (ii) propose an operational definition, and (iii) summarise the scientific approaches used so far. The literature review identified 17 studies on self-awakening. Most of them adopted an objective sleep evaluation (76%), targeted nocturnal sleep (76%), and used a single criterion to define the success of awakening (82%); for most studies, this corresponded to awakening within a time window of 30 minutes around the expected awakening time. Out of 715 total participants, 125 (17%) reported being self-awakeners, with an average age of 23.24 years and a slight predominance of males over females. These results reveal self-awakening as a relatively rare phenomenon. To facilitate the study of self-awakening, and based on the results of the literature review, we propose a quick paper-and-pencil screening questionnaire for self-awakeners and provide an initial validation for it. Taken together, the combined results of the literature review and the proposed questionnaire help characterise a theoretical framework for self-awakening, while providing a useful tool and empirical suggestions for future experimental studies, which should ideally employ objective measurements.
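
The 30-minute success criterion most reviewed studies use can be operationalized in a few lines. This is a sketch under the assumption that the window is centred on the expected time (studies differ on whether "30 minutes around" means a total span of 30 minutes or +/- 30 minutes, so the width is a parameter); the times below are hypothetical:

```python
from datetime import datetime, timedelta

def awakening_success(expected, actual, window_min=30):
    """Success if the actual awakening falls inside a window of
    `window_min` minutes centred on the expected time (+/- half width)."""
    return abs(actual - expected) <= timedelta(minutes=window_min / 2)

target = datetime(2024, 3, 1, 6, 30)
hit = awakening_success(target, datetime(2024, 3, 1, 6, 42))   # 12 min late
miss = awakening_success(target, datetime(2024, 3, 1, 7, 5))   # 35 min late
```

Making the window a parameter mirrors the review's point that operational definitions of self-awakening vary across studies.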


Subjects
Sleep, Suggestion, Male, Female, Humans, Young Adult, Adult, Wakefulness
15.
Emotion ; 23(2): 569-588, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35298222

ABSTRACT

Appraisals can be influenced by cultural beliefs and stereotypes. In line with this, past research has shown that judgments about the emotional expression of a face are influenced by the face's sex, and vice versa that judgments about the sex of a person somewhat depend on the person's facial expression. For example, participants associate anger with male faces, and female faces with happiness or sadness. However, the strength and the bidirectionality of these effects remain debated. Moreover, the interplay of a stimulus' emotion and sex remains mostly unknown in the auditory domain. To investigate these questions, we created a novel stimulus set of 121 avatar faces and 121 human voices (available at https://bit.ly/2JkXrpy) with matched, fine-scale changes along the emotional (happy to angry) and sexual (male to female) dimensions. In a first experiment (N = 76), we found clear evidence for the mutual influence of facial emotion and sex cues on ratings, and moreover for larger implicit (task-irrelevant) effects of stimulus' emotion than of sex. These findings were replicated and extended in two preregistered studies-one laboratory categorization study using the same face stimuli (N = 108; https://osf.io/ve9an), and one online study with vocalizations (N = 72; https://osf.io/vhc9g). Overall, results show that the associations of maleness-anger and femaleness-happiness exist across sensory modalities, and suggest that emotions expressed in the face and voice cannot be entirely disregarded, even when attention is mainly focused on determining stimulus' sex. We discuss the relevance of these findings for cognitive and neural models of face and voice processing. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Emotions, Judgment, Male, Humans, Female, Happiness, Anger, Sadness, Facial Expression
16.
Cortex ; 158: 83-95, 2023 01.
Article in English | MEDLINE | ID: mdl-36473276

ABSTRACT

Both the self-voice and emotional speech are salient signals that are prioritized in perception. Surprisingly, self-voice perception has been investigated to a lesser extent than the self-face. It therefore remains to be clarified whether self-voice prioritization is boosted by emotion, and whether self-relevance and emotion interact differently when attention is focused on who is speaking vs. what is being said. Thirty participants listened to 210 prerecorded words, spoken in their own or an unfamiliar voice and differing in emotional valence, in two tasks that manipulated the attention focus on either speaker identity or speech emotion. Event-related potentials (ERPs) of the electroencephalogram (EEG) informed on the temporal dynamics of self-relevance, emotion, and attention effects. Words spoken in one's own voice elicited a larger N1 and Late Positive Potential (LPP), but a smaller N400. Identity and emotion interactively modulated the P2 (self-positivity bias) and LPP (self-negativity bias). Attention to speaker identity more strongly modulated ERP responses within 600 ms post-word onset (N1, P2, N400), whereas attention to speech emotion altered the late component (LPP). However, attention did not modulate the interaction of self-relevance and emotion. These findings suggest that the self-voice is prioritized for neural processing at early sensory stages, and that both emotion and attention shape self-voice prioritization in speech processing. They also confirm involuntary processing of salient signals (self-relevance and emotion) even in situations in which attention is deliberately directed away from those cues. These findings have important implications for a better understanding of symptoms thought to arise from aberrant self-voice monitoring, such as auditory verbal hallucinations.


Subjects
Speech Perception, Voice, Humans, Male, Female, Speech, Electroencephalography, Evoked Potentials/physiology, Voice/physiology, Emotions/physiology, Hallucinations/psychology, Speech Perception/physiology
17.
Brain Lang ; 236: 105218, 2023 01.
Article in English | MEDLINE | ID: mdl-36571932

ABSTRACT

Inconsistent information can be hard to understand, but in cases like fiction readers can integrate it with little to no difficulty. The present study examined whether perspective switching can take place when only a minimal fictional description is provided (fictional-world condition), as compared with general world knowledge (real-world condition). Participants read sentences in which food items had animate or inanimate features while EEG was recorded, and performed a sentence completion task to evaluate recall. In the real-world condition, the N400 was significantly larger for sentences incongruent, rather than congruent, with general world knowledge. In the fictional-world condition, the N400 elicited by congruent and incongruent sentences did not differ, confirming that the minimal description impacted online information processing. Information consistent with general knowledge was better recalled in both conditions. The current results highlight how contextual information is integrated during sentence comprehension.


Subjects
Cheese, Electroencephalography, Humans, Male, Female, Evoked Potentials, Jealousy, Comprehension, Semantics
18.
Commun Biol ; 5(1): 1272, 2022 11 19.
Article in English | MEDLINE | ID: mdl-36402843

ABSTRACT

Auditory recognition is a crucial cognitive process that relies on the organization of single elements over time. However, little is known about the spatiotemporal dynamics underlying the conscious recognition of auditory sequences varying in complexity. To study this, we asked 71 participants to learn and recognize simple tonal musical sequences and matched complex atonal sequences while their brain activity was recorded using magnetoencephalography (MEG). Results reveal qualitative changes in neural activity dependent on stimulus complexity: recognition of tonal sequences engages hippocampal and cingulate areas, whereas recognition of atonal sequences mainly activates the auditory processing network. Our findings reveal the involvement of a cortico-subcortical brain network for auditory recognition and support the idea that stimulus complexity qualitatively alters the neural pathways of recognition memory.


Subjects
Magnetoencephalography, Recognition (Psychology), Humans, Magnetoencephalography/methods, Acoustic Stimulation/methods, Auditory Perception, Brain/physiology
19.
Front Hum Neurosci ; 16: 859731, 2022.
Article in English | MEDLINE | ID: mdl-35966990

ABSTRACT

Voices are a complex and rich acoustic signal processed in an extensive cortical brain network. Specialized regions within this network support voice perception and production and may be differentially affected in pathological voice processing. For example, the experience of hallucinating voices has been linked to hyperactivity in temporal and extra-temporal voice areas, possibly extending into regions associated with vocalization. Predominant self-monitoring hypotheses ascribe a primary role in auditory verbal hallucinations (AVH) to voice production regions. Alternative postulations view a generalized perceptual salience bias as causal to AVH. These theories are not mutually exclusive, as both ascribe the emergence and phenomenology of AVH to unbalanced top-down and bottom-up signal processing. The focus of the current study was to investigate the neurocognitive mechanisms underlying predisposition brain states for emergent hallucinations, detached from the effects of inner speech. Using the temporal voice area (TVA) localizer task, we explored putative hypersalient responses to passively presented sounds in relation to hallucination proneness (HP). Furthermore, to avoid confounds commonly found in clinical samples, we employed the Launay-Slade Hallucination Scale (LSHS) to quantify HP levels in healthy people across an experiential continuum spanning the general population. We report increased activation in the right posterior superior temporal gyrus (pSTG) during the perception of voice features that positively correlates with increased HP scores. In line with prior results, we propose that this right-lateralized pSTG activation might indicate early hypersensitivity to acoustic features coding speaker identity that extends beyond own-voice production to perception in healthy participants prone to experience AVH.

20.
Cogn Affect Behav Neurosci ; 22(6): 1250-1263, 2022 12.
Article in English | MEDLINE | ID: mdl-35879595

ABSTRACT

Stimuli that evoke emotions are salient, draw attentional resources, and facilitate situationally appropriate behavior in complex or conflicting environments. However, negative and positive emotions may motivate different response strategies. For example, a threatening stimulus might evoke avoidant behavior, whereas a positive stimulus may prompt approaching behavior. Therefore, emotional stimuli might either elicit differential behavioral responses when a conflict arises or simply mark salience. The present study used functional magnetic resonance imaging to investigate valence-specific emotion effects on attentional control in conflict processing, employing an adapted flanker task with neutral, negative, and positive stimuli. Slower responses were observed for incongruent than congruent trials. Neural activity in the dorsal anterior cingulate cortex was associated with conflict processing regardless of emotional stimulus quality. These findings confirm that both negative and positive emotional stimuli mark salience in both low (congruent) and high (incongruent) conflict scenarios. Regardless of the conflict level, emotional stimuli recruited greater attentional resources in goal-directed behavior.


Subjects
Psychological Conflict, Cingulate Gyrus, Humans, Cingulate Gyrus/physiology, Reaction Time/physiology, Emotions/physiology, Attention/physiology, Magnetic Resonance Imaging/methods